Intuitive control


Soft robotic prosthetic hand uses nerve signals for more natural control

FOX News

The approach combines the natural coordination patterns of our fingers with the decoding of motoneuron activity in the spinal cord. Recent advancements in technology have revolutionized the world of assistive and medical tools, and prosthetic limbs are no exception. We've come a long way from the rigid, purely cosmetic prosthetics of the past. Today, we're seeing the rise of softer, more realistic designs, many incorporating robotic components that significantly expand their functionality. Despite these exciting developments, a major challenge remains: How do we make these robotic limbs easier and more intuitive for users to control?
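The opening sentence hints at the technical recipe: a low-dimensional decode of spinal motoneuron activity drives many finger joints through fixed coordination patterns (often called postural synergies). As a rough illustration of that idea only, here is a minimal linear sketch in Python; the synergy matrix, channel counts, and the decode_motoneuron_activity stand-in are invented for the example, not taken from the research.

```python
import numpy as np

# Illustrative sketch only: a few decoded motoneuron activations drive
# many finger joints through fixed coordination patterns (synergies).
# All values are made up; the real decoder is far more sophisticated.

N_JOINTS = 10    # e.g., 2 joints per finger on a 5-finger hand
N_SYNERGIES = 2  # low-dimensional control space

# Each column is one coordination pattern: how all joints move together
# when that synergy channel is activated.
synergy_matrix = np.random.default_rng(0).uniform(0.0, 1.0, (N_JOINTS, N_SYNERGIES))

def decode_motoneuron_activity(neural_samples: np.ndarray) -> np.ndarray:
    """Stand-in for a real spinal motoneuron decoder: average rectified
    samples into one activation per synergy channel."""
    return np.abs(neural_samples).mean(axis=1)

def joint_angles(neural_samples: np.ndarray) -> np.ndarray:
    """Map low-dimensional decoded activations to full joint commands."""
    activations = decode_motoneuron_activity(neural_samples)  # (N_SYNERGIES,)
    return synergy_matrix @ activations                       # (N_JOINTS,)

# Example: two channels of synthetic neural data, 100 samples each.
samples = np.random.default_rng(1).normal(size=(N_SYNERGIES, 100))
print(joint_angles(samples))
```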


Man builds a bionic hand using AI after three years of research

Daily Mail - Science & tech

A Texan man has built his own bionic hand using artificial intelligence (AI) after three years of research. After finding most bionic hands can cost up to $150,000, Ryan Saavedra, 27, set out to create one at a fraction of the cost. The prosthetic he created, called the Globally Available Robotic Arm (GARA), measures electrical activity of muscle tissue – a method called electromyography (EMG) – and combines this with AI to predict hand movements. When attached to the limb of an amputee, it is capable of intuitive finger movements and clasping objects such as cups. Saavedra's company, Alt-Bionics, has already made a prototype that costs less than $700 (£520) to produce, and is now working to commercialise the device.
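The EMG-plus-AI pipeline the article describes (muscle signals in, predicted hand movement out) can be illustrated with a toy classifier. This is purely a hypothetical sketch: Alt-Bionics' actual model, features, and sensor setup are not described in the article, so the feature choices, gesture labels, and synthetic data below are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def emg_features(window: np.ndarray) -> np.ndarray:
    """Per-channel features for one EMG window of shape (channels, samples):
    root-mean-square amplitude and mean absolute value."""
    rms = np.sqrt((window ** 2).mean(axis=1))
    mav = np.abs(window).mean(axis=1)
    return np.concatenate([rms, mav])

rng = np.random.default_rng(0)
GESTURES = ["rest", "grasp", "point"]

# Synthetic training data: 60 windows of 8-channel EMG, 200 samples each,
# with per-gesture amplitude differences standing in for muscle activity.
X, y = [], []
for label, scale in zip(GESTURES, [0.1, 1.0, 0.5]):
    for _ in range(20):
        window = rng.normal(scale=scale, size=(8, 200))
        X.append(emg_features(window))
        y.append(label)

clf = LogisticRegression(max_iter=1000).fit(np.array(X), y)

# Classify a new window as if it came from the sensor in real time.
test_window = rng.normal(scale=1.0, size=(8, 200))
print(clf.predict([emg_features(test_window)]))  # likely 'grasp'
```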


Learning User-Preferred Mappings for Intuitive Robot Control

Li, Mengxi, Losey, Dylan P., Bohg, Jeannette, Sadigh, Dorsa

arXiv.org Artificial Intelligence

When humans control drones, cars, and robots, we often have some preconceived notion of how our inputs should make the system behave. Existing approaches to teleoperation typically assume a one-size-fits-all approach, where the designers pre-define a mapping between human inputs and robot actions, and every user must adapt to this mapping over repeated interactions. Instead, we propose a personalized method for learning the human's preferred or preconceived mapping from a few robot queries. Given a robot controller, we identify an alignment model that transforms the human's inputs so that the controller's output matches their expectations. We make this approach data-efficient by recognizing that human mappings have strong priors: we expect the input space to be proportional, reversible, and consistent. Incorporating these priors ensures that the robot learns an intuitive mapping from few examples. We test our learning approach in robot manipulation tasks inspired by assistive settings, where each user has different personal preferences and physical capabilities for teleoperating the robot arm. Our simulated and experimental results suggest that learning the mapping between inputs and robot actions improves objective and subjective performance when compared to manually defined alignments or learned alignments without intuitive priors. The supplementary video showing these user studies can be found at: https://youtu.be/rKHka0_48-Q.
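A drastically simplified way to see the alignment idea: treat the alignment as a linear map A from human inputs to robot actions and fit it from a few robot queries. Linearity bakes in the proportionality and consistency priors, and checking that A is invertible stands in for reversibility. The paper's actual alignment model is richer than this linear stand-in; the query data and dimensions below are invented for illustration.

```python
import numpy as np

# Hypothetical query data: the robot demonstrates an action, and the user
# gives the input they feel *should* produce it. 3 queries, 2-D spaces.
inputs = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
actions = np.array([[0.0, 1.0],
                    [-1.0, 0.0],
                    [-1.0, 1.0]])  # user expects a 90-degree rotation

# Least-squares fit of A such that actions[i] is approximately A @ inputs[i].
# A linear map is automatically proportional (scaled inputs give scaled
# actions) and consistent (same input always gives the same action).
A, *_ = np.linalg.lstsq(inputs, actions, rcond=None)
A = A.T

def aligned_action(user_input: np.ndarray) -> np.ndarray:
    """Transform a raw user input into the action the user intended."""
    return A @ user_input

# Reversibility check: pressing the opposite input should undo the action,
# which holds whenever A is (numerically) invertible.
assert np.linalg.matrix_rank(A) == A.shape[0]
print(aligned_action(np.array([1.0, 0.0])))  # approx. [0., 1.]
```

In the paper this learned alignment sits in front of the fixed robot controller, so the few queries personalize the mapping without retraining the controller itself.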